
pre-trained diffusion model
DreamSparse: Escaping from Plato's Cave with 2D Diffusion Model Given Sparse Views

Neural Information Processing Systems

Recent works [76, 33, 70, 72, 17, 7, 66, 20] started to explore sparse-view novel view synthesis, specifically focusing on generating novel views from a limited number of input images (typically 2-3) with known camera poses. Some of them [33, 70, 72, 17, 7] introduce additional priors into NeRF, e.g.


One-Step Diffusion Distillation through Score Implicit Matching

Neural Information Processing Systems

Despite their strong performances on many generative tasks, diffusion models require a large number of sampling steps in order to generate realistic samples. This has motivated the community to develop effective methods to distill pre-trained diffusion models into more efficient models, but these methods still typically require few-step inference or perform substantially worse than the underlying model.


Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models

Neural Information Processing Systems

Due to the ease of training, ability to scale, and high sample quality, diffusion models (DMs) have become the preferred option for generative modeling, with numerous pre-trained models available for a wide variety of datasets. Containing intricate information about data distributions, pre-trained DMs are valuable assets for downstream applications. In this work, we consider learning from pre-trained DMs and transferring their knowledge to other generative models in a data-free fashion. Specifically, we propose a general framework called Diff-Instruct to instruct the training of arbitrary generative models as long as the generated samples are differentiable with respect to the model parameters. Our proposed Diff-Instruct is built on a rigorous mathematical foundation where the instruction process directly corresponds to minimizing a novel divergence we call Integral Kullback-Leibler (IKL) divergence.
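The core mechanism of such an instruction process can be illustrated with a toy, self-contained sketch (not the paper's code): the gradient that trains the generator is the difference between the score of the generator's own noised distribution and the teacher's score, averaged over diffusion times, which is what minimizing an integral (over time) KL divergence amounts to. All names, the one-parameter generator, and the analytically known Gaussian scores below are illustrative assumptions for this sketch.

```python
import numpy as np

# Toy sketch of a Diff-Instruct-style update: a one-parameter "generator"
# g(z) = theta + z is distilled from a teacher whose data distribution is
# N(mu, 1). Both noised scores are Gaussian here, so they are known in
# closed form; in the real method the generator-side ("auxiliary") score
# would itself be learned by denoising score matching.

rng = np.random.default_rng(0)
mu = 3.0  # teacher's data mean (assumed known only for this toy example)

def teacher_score(x, t):
    # Score of the teacher's noised marginal N(mu, 1 + t) at time t.
    return (mu - x) / (1.0 + t)

def aux_score(x, t, theta):
    # Score of the generator's noised marginal N(theta, 1 + t).
    return (theta - x) / (1.0 + t)

theta = 0.0  # generator parameter, updated by gradient descent
lr = 0.5
for step in range(200):
    z = rng.standard_normal(64)
    x = theta + z                    # generated samples, differentiable in theta
    t = rng.uniform(0.0, 1.0)        # random diffusion time (the IKL integrates over t)
    xt = x + np.sqrt(t) * rng.standard_normal(64)  # forward-diffused samples
    # Per-sample gradient is the score difference; d(xt)/d(theta) = 1 here,
    # so the parameter gradient is simply its mean over the batch.
    grad = np.mean(aux_score(xt, t, theta) - teacher_score(xt, t))
    theta -= lr * grad

# theta is driven toward the teacher mean mu, with no teacher data used
```

The only signal reaching the generator is the score difference evaluated on its own (noised) samples, which is why the transfer is data-free: nothing from the teacher's training set is sampled.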